As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, prediction accuracy decreases because of quantization noise, especially in extremely low-bit settings. The main open problem is how to determine appropriate quantization parameters (e.g., scaling factors and rounding of weights). Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization, but this distance metric considers only local information. We analyze the problem of minimizing local metrics and show that it does not yield optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples in PTQ. In this paper, we propose PD-Quant to solve these problems. PD-Quant uses the difference between network predictions before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations during PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and of RegNetX-600MF up to 40.92% in the 2-bit weight, 2-bit activation setting. The code will be released at https://github.com/hustvl/PD-Quant.
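The core idea can be illustrated with a minimal sketch (not the authors' implementation): instead of a local feature distance, choose a quantization scale by minimizing the difference between full-precision and quantized *predictions*, here measured as KL divergence over softmax outputs. The helper names (`fake_quant`, `search_scale`), the 2-bit symmetric scheme, and the grid search are illustrative assumptions.

```python
import numpy as np

def fake_quant(w, scale, bits=2):
    """Uniform symmetric fake-quantization of weights at the given scale."""
    qmax = 2 ** (bits - 1) - 1
    qmin = -(2 ** (bits - 1))
    return np.clip(np.round(w / scale), qmin, qmax) * scale

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prediction_difference(logits_fp, logits_q):
    """KL divergence between full-precision and quantized predictions."""
    p, q = softmax(logits_fp), softmax(logits_q)
    return float(np.sum(p * np.log(p / np.clip(q, 1e-12, None))))

def search_scale(w, x, candidate_scales):
    """Pick the scale whose quantized predictions stay closest to FP ones."""
    logits_fp = x @ w
    return min(candidate_scales,
               key=lambda s: prediction_difference(logits_fp, x @ fake_quant(w, s)))
```

In practice the candidate scales would be derived from the calibration data (e.g., fractions of max|w|) and the search would be gradient-based rather than a grid.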
translated by Google Translate
Existing natural language understanding (NLU) models often rely on dataset biases rather than the intended task-relevant features to achieve high performance on specific datasets. As a result, these models perform poorly on datasets outside the training distribution. Some recent studies address this issue by down-weighting biased samples during training. However, these methods still encode biased latent features in their representations and neglect the dynamic nature of bias, which hinders model prediction. We propose an NLU debiasing method, named debiasing contrastive learning (DCT), to alleviate both problems simultaneously via contrastive learning. We devise a debiasing positive sampling strategy that mitigates biased latent features by selecting the least similar biased positive samples. We also propose a dynamic negative sampling strategy that captures the dynamic influence of biases by employing a bias-only model to dynamically select the most similar biased negative samples. We conduct experiments on three NLU benchmark datasets. Experimental results show that DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance. We also verify that DCT can reduce biased latent features in the model's representations.
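The two sampling strategies can be sketched as follows, under the assumption (not spelled out in the abstract) that "similarity" is cosine similarity between features produced by the bias-only model. The function names are illustrative, not from the paper's code.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between a (1 x d) anchor and (n x d) candidates."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def debiased_positive(anchor_bias_feat, pos_bias_feats):
    """Debiasing positive sampling: pick the positive whose bias-only
    features are LEAST similar to the anchor's, to suppress shared bias."""
    sims = cosine_sim(anchor_bias_feat[None, :], pos_bias_feats)[0]
    return int(np.argmin(sims))

def dynamic_negative(anchor_bias_feat, neg_bias_feats):
    """Dynamic negative sampling: pick the negative whose bias-only
    features are MOST similar to the anchor's, so the contrastive loss
    pushes the main model away from the bias."""
    sims = cosine_sim(anchor_bias_feat[None, :], neg_bias_feats)[0]
    return int(np.argmax(sims))
```

In training, the bias-only model's features would be recomputed as it trains, which is what makes the negative sampling "dynamic".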
Rapid advances in automation technologies, such as artificial intelligence (AI) and robotics, pose an increasing risk of occupational automation that may substantially affect labor markets. Recent socioeconomic studies suggest that nearly 50% of occupations are at high risk of being automated within the next decade. However, the lack of granular data and empirically informed models limits the accuracy of these studies and makes it hard to predict which jobs will be automated. In this paper, we study the automation risk of occupations by performing a classification task between automated and non-automated occupations. The available information consists of task statements, skills, and interactions for 910 occupations categorized by the Standard Occupational Classification (SOC). To fully exploit this information, we propose a graph-based semi-supervised classification method, named Automated Occupation Classification based on Graph Convolutional Networks (AOC-GCN), to identify the automation risk of occupations. The model integrates a heterogeneous graph to capture both the local and global contexts of occupations. Results show that our proposed method outperforms baseline models by considering both the internal features of occupations and the information from their external interactions. This study could help policymakers identify occupations at risk of automation before they enter the job market and support individuals' decision-making.
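For readers unfamiliar with graph convolutional networks, a single GCN propagation step (the building block AOC-GCN's name refers to; this sketch is the standard formulation, not the paper's heterogeneous-graph variant) looks like:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency matrix, aggregate neighbor features, then apply a linear
    map followed by ReLU."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)
```

In the semi-supervised setting, stacking such layers lets labels from a few annotated occupations propagate along graph edges to unlabeled ones.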
Semi-supervised anomaly detection (AD) is a data mining task that aims to learn features from a partially labeled dataset to help detect outliers. In this paper, we classify existing semi-supervised AD methods into two categories, unsupervised-based and supervised-based, and point out that most of them suffer from insufficient exploitation of labeled data and under-exploration of unlabeled data. To tackle these problems, we propose Deep Anomaly Detection and Search (DADS), which applies reinforcement learning (RL) to balance exploitation and exploration. During training, the agent searches for possible anomalies in a hierarchically structured dataset and uses the searched anomalies to enhance performance, which essentially draws on the idea of ensemble learning. Experimentally, we compare DADS with several state-of-the-art methods that exploit labeled known anomalies to detect both other known anomalies and unknown anomalies. Results show that DADS can efficiently and precisely search for anomalies in unlabeled data and learn from them, achieving good performance.
Over-parameterized models, typically pre-trained language models (LMs), have shown appealing expressive power thanks to their small learning bias. However, the huge learning capacity of LMs can also lead to large learning variance. In a pilot study, we find that, when faced with multiple domains, a critical portion of parameters behave unexpectedly in a domain-specific manner while others behave in a domain-general one. Motivated by this phenomenon, we posit for the first time that domain-general parameters can underpin a domain-general LM that can be derived from the original LM. To uncover the domain-general LM, we propose to identify domain-general parameters by playing lottery tickets (dubbed DoGe tickets). In order to intervene in the lottery, we propose a domain-general score, which depicts how a parameter is associated with the variance across domains. Comprehensive experiments are conducted on the Amazon, MNLI, and OntoNotes datasets. The results show that DoGe tickets achieve improved out-of-domain generalization compared with a range of competitive baselines. Analysis results further hint at the existence of domain-general parameters and the performance consistency of DoGe tickets.
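One plausible reading of the domain-general score, sketched below as an assumption rather than the paper's exact formula: a parameter whose value varies little across domain-specific fine-tunes is domain-general, so the "ticket" keeps the lowest-variance parameters.

```python
import numpy as np

def domain_general_mask(domain_params, keep_ratio=0.5):
    """domain_params: (num_domains, num_params) array holding the same
    parameter vector after fine-tuning on each domain. Parameters whose
    values vary least across domains are treated as domain-general and
    kept in the ticket; the rest are pruned."""
    variance = domain_params.var(axis=0)        # cross-domain variance per parameter
    k = int(keep_ratio * variance.size)
    threshold = np.sort(variance)[k - 1]        # k-th smallest variance
    return variance <= threshold                # True = keep (domain-general)
```

The mask would then be applied to the original LM's parameters to derive the domain-general LM.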
Extending existing tourism photos from partially captured scenes to full scenes is one of the desired experiences for photography applications. Although photo extrapolation has been well studied, extrapolating a photo (i.e., a selfie) from a narrow field of view to a wider one while preserving a similar visual style is more challenging. In this paper, we propose a factorized neural re-rendering model that produces photorealistic novel views from cluttered outdoor Internet photo collections, enabling applications including controllable scene re-rendering, photo extrapolation, and even extrapolated 3D photo generation. Specifically, we first develop a novel factorized re-rendering pipeline to handle the ambiguities in the decomposition of geometry, appearance, and illumination. We also propose a composited training strategy to tackle unexpected occlusions in Internet images. Moreover, to enhance photorealism when extrapolating tourism photos, we propose a novel realism augmentation process that complements appearance details by automatically propagating texture details from the narrowly captured photo to the extrapolated neural-rendered image. Experiments and photo editing examples on outdoor scenes demonstrate the superior performance of our proposed method in both photorealism and downstream applications.
Recently, there have been many advances in the autonomous driving community, attracting a lot of attention from both academia and industry. However, existing works mainly focus on cars; autonomous truck algorithms and models still require additional development. In this paper, we introduce an intelligent self-driving truck system. Our presented system consists of three main components: 1) a realistic traffic simulation module for generating realistic traffic flows in testing scenarios, 2) a high-fidelity truck model designed and evaluated to mimic real truck responses in real-world deployment, and 3) an intelligent planning module with a learning-based decision-making algorithm and a multi-mode trajectory planner, which accounts for the truck's constraints, road slope changes, and the surrounding traffic flow. We provide quantitative evaluations of each component individually to demonstrate the fidelity and performance of each part. We also deploy our proposed system on a real truck and conduct real-world experiments, showing our system's capability of mitigating the sim-to-real gap. Our code is available at https://github.com/inceptioresearch/iits
As a critical component of online advertising and marketing, click-through rate (CTR) prediction has attracted much attention from both industry and academia. Recently, deep learning has become the mainstream methodology for CTR prediction. Despite sustained efforts, existing approaches still pose several challenges. On the one hand, high-order interactions between features remain under-explored. On the other hand, high-order interactions may neglect the semantic information of low-order fields. In this paper, we propose a novel prediction method named FINT, which employs a field-aware interaction layer that captures high-order feature interactions while retaining low-order field information. To empirically investigate the effectiveness and robustness of FINT, we perform extensive experiments on three realistic databases: KDD2012, Criteo, and Avazu. The obtained results demonstrate that FINT can significantly improve performance compared to existing methods, without increasing the required computation. Moreover, the proposed method increased the advertising revenue of a large-scale online video app by about 2.72% in an A/B test. To better promote research in the CTR field, we release our code along with a reference implementation at: https://github.com/zhishan01/fint
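A minimal sketch of what "capturing high-order interactions while retaining low-order field information" can mean, under assumptions of our own (this is one plausible form of such a layer, not FINT's exact equations): each field embedding is multiplied elementwise by a learned mix of all field embeddings, raising the interaction order by one per layer, plus a residual term that carries the low-order field information forward.

```python
import numpy as np

def field_aware_interaction(v_prev, w, residual_scale):
    """One interaction layer sketch.
    v_prev:         (num_fields, dim) field embeddings from the previous layer
    w:              (num_fields, num_fields) learned field-mixing weights
    residual_scale: (num_fields, 1) learned residual coefficients
    The elementwise product raises the interaction order; the residual
    term preserves the low-order field information."""
    return v_prev * (w @ v_prev) + residual_scale * v_prev
```

Stacking L such layers yields interactions up to order L+1 while every layer's output still contains a scaled copy of the original field embeddings.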
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and to ensure that solutions generalize beyond previously seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty such as never-before-seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR, building on the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark, KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, and 4) provides a containerized and packaged pipeline for reproducing the OWL protocol and for adapting it to future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol, as an algorithm for reproducing experiments on the KOWL-718 benchmark, will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, and can be extended as further updates to Kinetics are released.
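The shape of an incremental OWL evaluation loop can be sketched generically (this is our own illustrative skeleton, not the KOWL-718 protocol's exact specification): data arrives in increments, labels unseen in earlier increments are novel classes, and the model is refit and scored after each increment.

```python
def owl_protocol(increments, fit, predict):
    """Incremental open-world evaluation sketch.
    increments: list of (train_samples, test_samples), each sample a
                (features, label) pair; labels not seen in earlier
                increments constitute novelty.
    fit/predict: callbacks for the model under evaluation."""
    seen, accuracies = set(), []
    for train, test in increments:
        fit(train)                                   # model meets novel labels here
        seen.update(label for _, label in train)
        correct = sum(predict(x) == y for x, y in test if y in seen)
        total = sum(1 for _, y in test if y in seen)
        accuracies.append(correct / total if total else float("nan"))
    return accuracies
```

A fuller protocol would also score nuisance-novelty tolerance and novelty detection itself, not just accuracy on the seen classes.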
Most action recognition datasets and algorithms assume a closed world, where all test samples are instances of the known classes. In open set problems, test samples may be drawn from either known or unknown classes. Existing open set action recognition methods typically extend closed set methods by adding post hoc analysis of classification scores or feature distances, and they do not capture the relations among all the video clip elements. Our approach uses reconstruction error to determine the novelty of a video, since unknown classes are harder to put back together and thus yield a higher reconstruction error than videos from known classes. We refer to our solution to the open set action recognition problem as "Humpty Dumpty", owing to its reconstruction abilities. Humpty Dumpty is a novel graph-based autoencoder that accounts for contextual and semantic relations among the clip pieces for improved reconstruction. A larger reconstruction error indicates an increased likelihood that the action cannot be reconstructed, i.e., Humpty Dumpty cannot be put back together again, meaning the action has never been seen before and is novel/unknown. Extensive experiments are performed on two publicly available action recognition datasets, HMDB-51 and UCF-101, showing state-of-the-art performance for open set action recognition.
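The decision rule itself is simple and can be shown with a minimal stand-in reconstructor: here a linear (PCA-style) model fit on known-class features takes the place of the paper's graph autoencoder, purely to illustrate the "high reconstruction error ⇒ novel" logic. All function names are ours.

```python
import numpy as np

def fit_reconstructor(known_feats, n_components=2):
    """Fit a linear reconstructor (top principal components) on features of
    known classes; stands in for the paper's graph-based autoencoder."""
    mean = known_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(known_feats - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(feat, mean, components):
    """Norm of the part of the feature the reconstructor cannot recover."""
    centered = feat - mean
    recon = (centered @ components.T) @ components
    return float(np.linalg.norm(centered - recon))

def is_novel(feat, mean, components, threshold):
    """Flag as unknown/novel when the clip cannot be 'put back together'."""
    return reconstruction_error(feat, mean, components) > threshold
```

The threshold would in practice be calibrated on held-out known-class clips (e.g., a high percentile of their reconstruction errors).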